
Collaborating Authors

Imagination Technologies


Fast Local Neural Regression for Low-Cost, Path Traced Lambertian Global Illumination

Salmi, Arturo, Cséfalvay, Szabolcs, Imber, James

arXiv.org Artificial Intelligence

Despite recent advances in hardware acceleration of ray tracing, real-time ray budgets remain stubbornly limited at a handful of samples per pixel (spp) on commodity hardware, placing the onus on denoising algorithms to achieve high visual quality for path traced global illumination. Neural network-based solutions give excellent result quality at the cost of increased execution time relative to hand-engineered methods, making them less suitable for deployment on resource-constrained systems. We therefore propose incorporating a neural network into a computationally-efficient local linear model-based denoiser, and demonstrate faithful single-frame reconstruction of global illumination for Lambertian scenes at very low sample counts (1spp) and for low computational cost. Other contributions include improving the quality and performance of local linear model-based denoising through a simplified mathematical treatment, and demonstration of the surprising usefulness of ambient occlusion as a guide channel. We also show how our technique is straightforwardly extensible to joint denoising and upsampling of path traced renders with reference to low-cost, rasterized guide channels.
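The local linear model idea at the core of this class of denoisers can be illustrated with a minimal single-channel sketch in the style of a guided filter: within each window, the noisy radiance is regressed against a guide channel as noisy ≈ a·guide + b, and the per-window coefficients are averaged. This is an illustrative toy under assumed parameters (`r`, `eps`), not the paper's neural-augmented denoiser, and the helper names are my own:

```python
import numpy as np

def box_filter(x, r):
    """Window mean with radius r, computed via an integral image (edge-padded)."""
    k = 2 * r + 1
    pad = np.pad(x, r, mode="edge")
    s = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # s[i, j] = sum of pad[:i, :j]
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def guided_filter(guide, noisy, r=8, eps=1e-4):
    """Fit noisy ~ a*guide + b per window, then average coefficients per pixel."""
    mg = box_filter(guide, r)
    mn = box_filter(noisy, r)
    cov = box_filter(guide * noisy, r) - mg * mn
    var = box_filter(guide * guide, r) - mg * mg
    a = cov / (var + eps)       # eps regularises flat (low-variance) regions
    b = mn - a * mg
    # Each pixel takes the mean of the coefficients of all windows covering it.
    return box_filter(a, r) * guide + box_filter(b, r)
```

Because the reconstruction is a linear combination of cheap box filters, the cost is low and independent of window size, which is what makes local linear models attractive on resource-constrained hardware; the paper's contribution is (roughly) to let a small network drive such a model rather than replace it.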


Neural network core is optimised for robotaxis

#artificialintelligence

Imagination Technologies has launched a scalable neural network accelerator IP core optimised for automotive and autonomous systems, but also aimed at industrial designs. The Series4 Neural Network Accelerator (NNA) core has been optimised for the YOLOv3 object-detection network and for processing large, rectangular images, rather than serving as a general-purpose execution unit. It is aimed at developers of system-on-chip devices for sensor fusion in high-performance autonomous vehicles such as robotaxis, last-mile delivery vehicles and automated street sweepers. The NNA core achieves 12.5TOPS of performance through 4096 multiply-accumulate (MAC) units in 1mm2 on a 5nm process technology, all connected by a network on chip (NoC). The company says this is over 20x faster than an embedded GPU and 1,000x faster than an embedded CPU for AI inference.
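The quoted figures can be sanity-checked against each other: peak TOPS, MAC count and clock rate are related by throughput = MACs × ops-per-MAC × clock. The convention that one MAC counts as two operations (a multiply plus an add) is an assumption here, not something stated in the article:

```python
# Back-of-envelope check of the quoted Series4 figures.
MAC_UNITS = 4096      # multiply-accumulate units quoted
PEAK_TOPS = 12.5      # quoted peak throughput
OPS_PER_MAC = 2       # assumption: multiply + accumulate = 2 ops

ops_per_cycle = MAC_UNITS * OPS_PER_MAC
implied_clock_ghz = PEAK_TOPS * 1e12 / ops_per_cycle / 1e9
print(f"implied clock ~ {implied_clock_ghz:.2f} GHz")  # ~1.53 GHz
```

An implied clock around 1.5GHz is plausible for a 5nm accelerator, which suggests the headline numbers are internally consistent under this counting convention.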


The use of AI and ML in protecting the IoT

#artificialintelligence

For the last few years, internet security has been based on a combination of anti-virus software, isolation techniques and encryption software. Government bodies and security companies would track traffic on the internet and look for suspicious material based on its signature. These techniques focused on running anti-malware software after the fact: they enabled the segregation of good data from malware. But if malware went undetected, it could lurk in the background of systems for months or even years and become active later. The consumer world is rapidly changing.


Google goes peacenik, chip wizardry and AI gets into art and drugs

#artificialintelligence

While we've already covered a lot of AI stories this week, a few slipped under the radar so, as is traditional, here's the roundup of some news you may have missed. Google turns peacenik: under fire for helping the US military use AI to better bomb people, Google has not only stepped away from that particular Pentagon contract but also this week released a set of seven principles for the development of AI. CEO Sundar Pichai made a point of noting that they are "not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions." And they are pretty good, starting with "be socially beneficial" (the AI equivalent of "Do no evil"?). They also prompted Twitter CEO Jack Dorsey to ponder out loud whether they were something the tech industry as a whole could rally around.


What users could expect from Apple's homegrown GPUs for iPhones, iPads

#artificialintelligence

Apple has one big reason to move to a homegrown GPU: it wants full control over the hardware and software in its devices. The device maker is apparently developing its own GPU from scratch after dumping Imagination Technologies' PowerVR architecture, which is used in the iPhone 7. That smartphone runs on the A10 Fusion chip, which incorporates a PowerVR GPU. It's not certain when Apple's homegrown GPU will appear in devices, and the company didn't respond to a request for comment. Apple has made graphics improvement a priority in its iPhone and iPad models, so users should get better gaming experiences. The homegrown GPU could also boost artificial intelligence capabilities on Apple's devices and bring on board features like image recognition.


Future iPhones to get 4K with new PowerVR graphics architecture

PCWorld

A new PowerVR graphics architecture from Imagination Technologies will give a serious graphics boost to Apple's future iPhones, including 4K graphics. Imagination is announcing Furian, the first major graphics architecture upgrade since Rogue, which was announced in 2010. Apple's iPhone 7 currently has graphics based on the Rogue architecture. The Furian architecture also sets up future iPhones for graphics-intensive applications like virtual reality. Furian will be used in new PowerVR GPUs like the Series8XT, according to Imagination.


Always getting smarter? The trends of CES 2017 - Imagination Technologies

#artificialintelligence

With the dust firmly settled on this year's CES, we thought we'd take a look back at the show to see what stood out in terms of overall trends, with AI, security, connected cars, VR, AR and drones all making their mark. Did this year's show have the X-Factor? Well, to be honest, it was probably more 'The Voice'. The big shout, so to speak, turned out to be Amazon's Alexa voice assistant, which seems to have broken out of its Echo cage and made its way into a wide variety of devices, from a number of third-party speakers to 'smart' fridges and autonomous cleaning robots. However, while Alexa is the current poster child for smart AI, it clearly has a long way to go to become truly smart.